72 research outputs found

    The P-ART framework for placement of virtual network services in a multi-cloud environment

    Carrier network services are distributed, dynamic, and investment intensive. Deploying them as virtual network services (VNS) brings the promise of low-cost, agile deployments, which reduce the time to market for new services. If these virtual services are hosted dynamically over multiple clouds, greater flexibility in optimizing performance and cost can be achieved. On the flip side, when orchestrated over multiple clouds, the stringent performance norms for carrier services become difficult to meet, necessitating novel and innovative placement strategies. In selecting the appropriate combination of clouds for placement, it is important to look ahead and visualize the environment that will exist at the time a virtual network service is actually activated. This serves multiple purposes: clouds can be selected to optimize the cost, the chosen performance parameters can be kept within the defined limits, and the speed of placement can be increased. In this paper, we propose the P-ART (Predictive-Adaptive Real Time) framework, which relies on predictive-deductive features to achieve these objectives. With so much riding on predictions, we include in our framework a novel concept-drift compensation technique that brings the predictions closer to reality by accounting for long-term traffic variations. At the same time, near real-time updates of the prediction models take care of sudden short-term variations. These predictions are then used by a new randomized placement heuristic that carries out fast cloud selection using a least-cost, latency-constrained policy. An empirical analysis, carried out using datasets from a queuing-theoretic model and through an implementation on CloudLab, proves the effectiveness of the P-ART framework. The placement system works fast, placing thousands of functions in a sub-minute time frame with a high acceptance ratio, making it suitable for dynamic placement.
We expect the framework to be an important step in making the deployment of carrier-grade VNS on multi-cloud systems, using network function virtualization (NFV), a reality. This publication was made possible by NPRP grant # 8-634-1-131 from the Qatar National Research Fund (a member of Qatar Foundation) and by National Science Foundation, USA grants CNS-1718929 and CNS-1547380.
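The least-cost, latency-constrained randomized selection described above can be sketched as follows; the cloud record fields and the candidate sample size `k` are illustrative assumptions, not the paper's actual implementation:

```python
import random

def select_cloud(clouds, latency_limit, k=3, seed=0):
    """Randomized least-cost, latency-constrained selection (sketch).

    clouds: list of dicts with hypothetical 'name', 'cost', 'latency'
    fields, where 'latency' is the predicted value at activation time.
    Samples up to k feasible candidates and returns the cheapest one.
    """
    rng = random.Random(seed)
    # Keep only clouds predicted to satisfy the latency constraint.
    feasible = [c for c in clouds if c["latency"] <= latency_limit]
    if not feasible:
        return None  # placement request rejected
    # Randomization: examine a small sample rather than all clouds.
    candidates = rng.sample(feasible, min(k, len(feasible)))
    return min(candidates, key=lambda c: c["cost"])

clouds = [
    {"name": "A", "cost": 5.0, "latency": 12.0},
    {"name": "B", "cost": 3.0, "latency": 30.0},  # violates a 20 ms limit
    {"name": "C", "cost": 4.0, "latency": 18.0},
]
choice = select_cloud(clouds, latency_limit=20.0)
```

Sampling a small random subset before taking the minimum is what would keep selection fast at the scale of thousands of placements, at the price of an occasionally non-minimal cost.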

    Fair Selection of Edge Nodes to Participate in Clustered Federated Multitask Learning

    Clustered federated multitask learning has been introduced as an efficient technique for cases where data is unbalanced and distributed amongst clients in a non-independent and identically distributed manner. While a similarity metric can provide client groups with specialized models according to their data distribution, this process can be time-consuming because the server needs to capture the data distributions of all clients before it can perform correct clustering. Due to resource and time constraints at the network edge, only a fraction of devices is selected every round, necessitating an efficient scheduling technique to address these issues. Thus, this paper introduces a two-phased client selection and scheduling approach to improve the convergence speed while capturing all data distributions. This approach ensures correct clustering and fairness between clients by leveraging bandwidth reuse for participants that spent a longer time training their models, and by exploiting the heterogeneity of the devices to schedule the participants according to their delay. The server then performs the clustering depending on predetermined thresholds and stopping criteria. When a specified cluster approaches a stopping point, the server employs greedy selection for that cluster by picking the devices with lower delay and better resources. A convergence analysis is provided, showing the relationship between the proposed scheduling approach and the convergence rate of the specialized models, to obtain convergence bounds under non-i.i.d. data distribution. We carry out extensive simulations, and the results demonstrate that the proposed algorithms reduce training time and improve the convergence speed while equipping every user with a customized model tailored to its data distribution. Comment: To appear in IEEE Transactions on Network and Service Management, special issue on Federated Learning for the Management of Networked Systems.
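The two-phased selection above might look roughly like this minimal sketch, where the fairness signal (rounds a client has waited) and the field names are our assumptions, not the paper's actual scheduler:

```python
def schedule_round(clients, budget, cluster_converging=False):
    """Illustrative two-phased client selection (names assumed).

    clients: dicts with 'id', 'delay', 'rounds_missed'.
    Phase 1 (exploration): prioritize clients that have waited longest,
    so every data distribution is eventually captured (fairness).
    Phase 2 (exploitation): once the cluster approaches its stopping
    criterion, greedily pick the lowest-delay clients.
    """
    if cluster_converging:
        ranked = sorted(clients, key=lambda c: c["delay"])
    else:
        ranked = sorted(clients, key=lambda c: -c["rounds_missed"])
    return [c["id"] for c in ranked[:budget]]

clients = [
    {"id": 1, "delay": 1.0, "rounds_missed": 0},
    {"id": 2, "delay": 5.0, "rounds_missed": 3},
    {"id": 3, "delay": 3.0, "rounds_missed": 1},
]
fair = schedule_round(clients, budget=2)                              # phase 1
greedy = schedule_round(clients, budget=2, cluster_converging=True)   # phase 2
```

Note how the slow client (id 2) is favored in phase 1 for fairness but dropped in phase 2, where speed dominates.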

    Blockchain technologies to mitigate COVID-19 challenges : a scoping review

    Background: As public health strategists and policymakers explore different approaches to lessen the devastating effects of the novel coronavirus disease (COVID-19), blockchain technology has emerged as a resource that can be utilized in numerous ways. Many blockchain technologies have been proposed or implemented during the COVID-19 pandemic; however, to the best of our knowledge, no comprehensive reviews have been conducted to uncover and summarise the main features of these technologies. Objective: This study aims to explore proposed or implemented blockchain technologies used to mitigate COVID-19 challenges as reported in the literature. Methods: We conducted a scoping review in line with the guidelines of the PRISMA Extension for Scoping Reviews (PRISMA-ScR). To identify relevant studies, we searched 11 bibliographic databases (e.g., EMBASE and MEDLINE) and conducted backward and forward reference list checking of the included studies and relevant reviews. Study selection and data extraction were conducted by 2 reviewers independently. Data extracted from the included studies was narratively summarised and described. Results: 19 of 225 retrieved studies met the eligibility criteria of this review. The included studies reported 10 use cases of blockchain to mitigate COVID-19 challenges; the most prominent use cases were contact tracing and immunity passports. While blockchain technology was developed in 10 studies, its use was only proposed in the remaining 9 studies. Public blockchain was the most commonly utilized type in the included studies. Altogether, 8 different consensus mechanisms were used in the included studies. Of the 10 studies that identified the platform used, 9 used Ethereum to run the blockchain. Solidity was the most prominent programming language used to develop blockchain technology in the included studies. The transaction cost was reported in only 4 of the included studies and varied between USD 10⁻¹⁰ and USD 5.
The expected latency and expected scalability were not identified in the included studies. Conclusion: Blockchain technologies are expected to play an integral role in the fight against the COVID-19 pandemic. Many possible applications of blockchain were found in this review; however, most of them are not mature enough to reveal their expected impact in the fight against COVID-19. We encourage governments, health authorities, and policymakers to consider all blockchain applications suggested in the current review to combat COVID-19 challenges. There is a pressing need to empirically examine how effective blockchain technologies are in mitigating COVID-19 challenges. Further studies are required to assess the performance of blockchain technologies in the fight against COVID-19 in terms of transaction cost, scalability, and/or latency when using different consensus algorithms, platforms, and access types.

    A Combined Decision for Secure Cloud Computing Based on Machine Learning and Past Information

    Cloud computing has been presented as one of the most efficient techniques for hosting and delivering services over the internet. However, even with its wide areas of application, cloud security is still a major concern of cloud computing. To protect communication in such an environment, many secure systems have been proposed, and most of them are based on attack signatures. These systems are often not very efficient at detecting all types of attacks. Recently, machine learning techniques have been proposed instead; however, their accuracy depends on the training data, meaning that if the training set does not include enough examples of a particular class, the decision may not be accurate. In this paper, we propose a new firewall scheme named the Enhanced Intrusion Detection and Classification (EIDC) system for a secure cloud computing environment. EIDC detects and classifies received traffic packets using a new combination technique called most frequent decision, in which a node's past decisions (in this document, the words 'node' and 'user' are used interchangeably) are combined with the current decision of the machine learning algorithm to estimate the final attack category classification. This strategy increases the learning performance and the system accuracy. To generate our results, the publicly available dataset UNSW-NB-15 is used. Our results show that EIDC improves anomaly detection by 24% compared to complex tree. This publication was made possible by the NPRP award [NPRP 8-634-1-131] from the Qatar National Research Fund (a member of The Qatar Foundation).
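A minimal sketch of such a most-frequent-decision combiner, assuming a sliding window of past decisions per node (the window size and tie-breaking behavior are not specified in the abstract and are our assumptions):

```python
from collections import Counter, deque

class MostFrequentDecision:
    """Combine a node's past decisions with the current classifier
    output by majority vote over a recent sliding window (sketch)."""

    def __init__(self, window=5):
        self.window = window
        self.history = {}  # node id -> deque of recent decisions

    def classify(self, node, current_label):
        # Record the current machine-learning decision for this node.
        h = self.history.setdefault(node, deque(maxlen=self.window))
        h.append(current_label)
        # Final label = most frequent decision in the recent window.
        return Counter(h).most_common(1)[0][0]
```

A single misclassification by the underlying model is thus outvoted by the node's recent history, which is the intuition behind the reported accuracy gain.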

    Reinforcement learning approaches for efficient and secure blockchain-powered smart health systems

    Emerging technological innovation toward the e-Health transition is a worldwide priority for ensuring people's quality of life. Secure exchange and analysis of medical data amongst diverse organizations would increase the efficiency of e-Health systems in addressing medical phenomena such as outbreaks and acute patient disorders. However, medical data exchange is challenging, since issues such as privacy, security, and latency may arise. Thus, this paper introduces Healthchain-RL, an adaptive, intelligent, consortium, and secure blockchain-powered health system employing artificial intelligence, specifically Deep Reinforcement Learning (DRL). Blockchain and DRL technologies have shown robust performance in different fields, including healthcare systems. The proposed Healthchain-RL framework aggregates heterogeneous healthcare organizations with different requirements using the power of blockchain, while maintaining an optimized framework via an online, intelligent decision-making RL algorithm. Hence, an intelligent Blockchain Manager (BM) is proposed based on DRL, mainly Deep Q-Learning and its variations, to optimize the blockchain network's behavior in real time while considering medical data requirements such as urgency and security levels. The proposed BM intelligently changes the blockchain configuration while optimizing the trade-off between security, latency, and cost. The optimization model is formulated as a Markov Decision Process (MDP) and solved effectively using three RL-based techniques: Deep Q-Networks (DQN), Double Deep Q-Networks (DDQN), and Dueling Double Deep Q-Networks (D3QN). Finally, a comprehensive comparison is conducted between the proposed techniques and two heuristic approaches. The proposed strategies converge in real time, adapting to the system status while maintaining maximum security and minimum latency and cost.
© 2021 Elsevier B.V. Qatar Foundation; Qatar National Research Fund.
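The paper's BM uses deep Q-variants; purely to illustrate the underlying value-update loop, here is a tabular one-step Q-learning sketch on an invented two-configuration decision problem (all states, actions, and rewards below are assumptions, not the paper's model):

```python
import random

# Toy stand-in for the Blockchain Manager's choice: pick a block
# configuration ("light" = low latency/cost, "secure" = high security)
# per incoming request type. Rewards encode the security/latency trade-off.
ACTIONS = ["light", "secure"]
REWARD = {("urgent", "secure"): 1.0, ("urgent", "light"): -1.0,
          ("routine", "light"): 0.5, ("routine", "secure"): -0.2}

def train(episodes=2000, alpha=0.1, seed=0):
    """One-step Q-learning: each episode is a single request, so the
    update has no bootstrapped next-state term."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("urgent", "routine") for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(("urgent", "routine"))  # incoming request type
        a = rng.choice(ACTIONS)                # explore uniformly
        q[(s, a)] += alpha * (REWARD[(s, a)] - q[(s, a)])
    return q

q = train()
```

After training, the greedy policy reads off the learned trade-off: secure blocks for urgent medical data, light blocks for routine traffic. The deep variants replace the table with a neural network over a much richer state.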

    DroneRF dataset: A dataset of drones for RF-based detection, classification and identification

    Modern technology has pushed us into the information age, making it easier to generate and record vast quantities of new data. Datasets can help in analyzing a situation to give a better understanding and, more importantly, to support decision making. Consequently, datasets, and the uses to which they can be put, have become increasingly valuable commodities. This article describes the DroneRF dataset: a radio frequency (RF) based dataset of drones functioning in different modes, including off, on and connected, hovering, flying, and video recording. The dataset contains recordings of RF activities, composed of 227 recorded segments collected from 3 different drones, as well as recordings of background RF activities with no drones. The data has been collected by RF receivers that intercept the drones' communications with the flight control module. The receivers are connected via PCIe cables to two laptops that run a program responsible for fetching, processing, and storing the sensed RF data in a database. An example of how this dataset can be interpreted and used can be found in the related research article "RF-based drone detection and identification using deep learning approaches: an initiative towards a large open source drone database" (Al-Sa'd et al., 2019). This publication was supported by Qatar University Internal Grant No. QUCP-CENG-2018/2019-1. The work of Aiman Erbad is supported by grant number NPRP 7-1469-1-273. The findings achieved herein are solely the responsibility of the authors.

    Smart Edge Healthcare Data Sharing System

    Smart health systems improve the efficiency of healthcare infrastructures and biomedical systems by integrating information and technology into health and medical practices. However, reliability, scalability, and latency are among the many challenges hindering the realization of next-generation healthcare. In fact, with the exponential increase in the volume of patient data being produced and processed, many healthcare systems are being overwhelmed by the deluge of data they face. Many systems have been proposed to improve system latency and scalability, but there are concerns that some of these systems require increasing levels of human interaction, which impacts their efficiency. Recently, machine learning techniques have been gaining considerable interest in health applications, as they exhibit fast processing with real-time predictions. In this paper, we propose a new healthcare system to reduce the waiting time in the emergency department and improve network scalability in any distributed system. The proposed model integrates the power of edge computing with machine learning techniques to provide a good quality of healthcare services. The machine learning algorithm is used to generate a classifier that can predict, with high accuracy, the likelihood of a patient having a heart attack from physiological (ECG) signals. The proposed system stores the patient data in a centralized database and generates a unique index using a new data-dependent indexing algorithm that transforms the patient data into a unique code to be sent in any medical data exchange. Multiple machine learning algorithms are studied, and the best algorithm is selected based on its performance on the heart attack prediction problem. Simulation results show that the proposed model can effectively detect abnormal heartbeats with 91% accuracy using the SVM algorithm.
We also show that the proposed system outperforms conventional indexing systems in terms of collision rate. © 2020 IEEE. Qatar National Research Fund.
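One plausible reading of the data-dependent indexing step, sketched with a cryptographic hash (the hash choice and record format are our assumptions; the abstract does not specify the actual algorithm):

```python
import hashlib

def patient_index(record: dict) -> str:
    """Transform a patient record into a short, data-dependent code so
    that only the index, not the raw data, travels during an exchange.
    Sorting the fields makes the code independent of field order."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

Identical records always map to the same code, while a cryptographic hash keeps the collision rate negligible, which is the property the paper's evaluation measures.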

    QoE-aware distributed cloud-based live streaming of multisourced multiview videos

    Video streaming is one of the most prevalent and bandwidth-consuming Internet applications today. Advancements in technology and the prevalence of video capturing devices have resulted in massive multi-sourced (aka crowdsourced) live video broadcasting over the Internet. A single scene may be captured by multiple spectators from different angles (views), creating an opportunity for interactive multiview video by integrating these individually captured views. Such multi-sourced multiview video offers a more realistic and immersive experience of a scene. In this paper, we present a Quality of Experience (QoE) driven, cost-effective Crowdsourced Multiview Live Streaming (CMLS) system. CMLS aims to minimize the overall system cost by selecting the optimal cloud site for video transcoding and the number of representations, based on the view popularity and the viewers' available bandwidth. In addition, we present a QoE metric that considers delay and received video quality. We formulate the selection of the optimal cloud site and number of representations to meet the required QoE as a resource allocation problem using Integer Programming (IP). Moreover, we present a Greedy Minimal Cost (GMC) algorithm to perform resource allocation efficiently. We use real live video traces collected from three large-scale live video providers (Twitch.tv, YouTube Live, and YouNow) to evaluate our proposed strategy. We evaluate the GMC algorithm considering the overall cost, QoE, video quality, and average latency between viewers and the transcoding location. We compare our results with the optimal solution and the state-of-the-art policy used in a popular video streaming system. Our results demonstrate that GMC achieves near-optimal results and substantially outperforms the state-of-the-art policy. This publication was made possible by NPRP grant # [8-519-1-108] from the Qatar National Research Fund (a member of Qatar Foundation). We are thankful to Denny Stohr for providing the YouNow dataset.
The findings achieved herein are solely the responsibility of the authors.
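A GMC-style allocation could be sketched as follows, assuming per-site transcoding cost and per-view QoE estimates (the data layout and threshold form are our assumptions; the paper's GMC also chooses the number of representations, which this sketch omits):

```python
def greedy_minimal_cost(views, sites, qoe_min):
    """Greedy sketch: for each view, pick the cheapest transcoding site
    whose estimated QoE meets the threshold.

    views: list of view ids.
    sites: dict of site -> {"cost": float, "qoe": {view: score}}.
    Returns (allocation, total_cost); unservable views map to None.
    """
    allocation, total = {}, 0.0
    for v in views:
        ok = [(s, d["cost"]) for s, d in sites.items()
              if d["qoe"].get(v, 0.0) >= qoe_min]
        if ok:
            site, cost = min(ok, key=lambda x: x[1])
            allocation[v], total = site, total + cost
        else:
            allocation[v] = None  # no site meets the QoE requirement
    return allocation, total

sites = {"us-east": {"cost": 2.0, "qoe": {"v1": 0.9, "v2": 0.5}},
         "eu-west": {"cost": 1.0, "qoe": {"v1": 0.8, "v2": 0.9}}}
alloc, total = greedy_minimal_cost(["v1", "v2"], sites, qoe_min=0.7)
```

The greedy pass runs in time linear in views × sites, which is why it can approach the IP optimum at a fraction of the solving cost.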

    Convolutional Autoencoder Approach for EEG Compression and Reconstruction in m-Health Systems

    In the last few years, the number of patients with chronic diseases requiring constant monitoring has increased rapidly, which motivates researchers to develop scalable remote health applications. Nevertheless, the amount of real-time data transmitted through current dynamic networks, with their limited and restricted bandwidth, end-to-end delay, and transmission power, limits efficient transmission of the data. Motivated by the high energy consumed for transmission, applying data reduction techniques to the vital signs at the transmitter side presents an efficient edge-based approach that significantly reduces the transmission energy. However, a new problem arises: receiving the data at the server side with an acceptable distortion rate (i.e., avoiding deformation of the vital signs caused by inefficient data reduction). In this paper, we introduce a Deep Learning (DL) approach based on a Convolutional Auto-Encoder (CAE) to compress and reconstruct vital signs in general, and electroencephalogram (EEG) signals specifically, with minimum distortion. The results show that using a CAE provides an efficient distortion rate while maximizing the compression ratio. However, learning makes the CAE application-specific, where each CAE model is designed specifically for a certain application. This work was made possible by NPRP grant # 7-684-1-127 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors.
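The CAE itself requires a deep learning stack, but the two quantities the abstract trades off, compression ratio and distortion, can be illustrated with the standard percent-RMS-difference (PRD) measure for compressed biomedical signals (treating PRD as the distortion metric here is our assumption):

```python
import math

def compression_ratio(original_len, compressed_len):
    """CR = original size / compressed size (higher is better)."""
    return original_len / compressed_len

def prd(original, reconstructed):
    """Percent RMS difference between a signal and its reconstruction
    (lower is better). The CAE is not reproduced here; this only shows
    how its reconstructed output would be scored."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)
```

For example, a perfect reconstruction scores a PRD of 0, and shrinking a 1024-sample EEG window to a 64-value latent code gives a CR of 16: the CAE's objective is to push CR up while keeping PRD near that lower bound.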